sensory representation
Does Thought Require Sensory Grounding? From Pure Thinkers to Large Language Models
Presidential Address delivered under the title "Can a Large Language Model Think?" at the one hundred nineteenth Eastern Division meeting of the American Philosophical Association on January 6, 2023.

Does the capacity to think require the capacity to sense? A lively debate on this topic runs throughout the history of philosophy and now animates discussions of artificial intelligence. In favor of a positive answer, Aristotle says, "The soul never thinks without an image." Aquinas says, "There's nothing in the intellect that wasn't previously in the senses." Hume says, "All our simple ideas in their first appearance are derived from simple impressions." With some minimal assumptions, all three of these statements suggest that thinking requires the capacity to sense, or at least requires having had the capacity to sense at some point.

Contrasting with these empiricist theses, rationalist philosophers have often denied that thinking requires sensing. Plato holds that we can think about the forms before we have senses and a body. Descartes holds that the pure intellect thinks independently of the senses. Navigating between empiricism and rationalism, Kant discusses the issue extensively ("Thoughts without content are empty"); unsurprisingly, his final views on the matter are complicated.

In recent decades, this philosophical debate has become central to debates in artificial intelligence and cognitive science, most prominently through Stevan Harnad's discussion of the symbol grounding problem. Harnad and others held that for symbols to have meaning, they must be causally grounded in sensory connections to the environment. To be meaningful, the symbol "RED" must be grounded in seeing red. The symbol "WATER" must be grounded in a sensory connection to water. If we assume that thinking and meaning go together in AI systems, then this amounts to another version of the thesis that thinking requires sensing.

In the last few years, discussion of symbol grounding has become especially widespread in the debate over large language models (LLMs) such as the GPT systems. Can large language models think, mean, or understand?
Do You Want AI to Be Conscious? - Issue 102: Hidden Truths
People often ask me whether human-level artificial intelligence will eventually become conscious. My response is: Do you want it to be conscious? I think it is largely up to us whether our machines will wake up. The mechanisms of consciousness--the reasons we have a vivid and direct experience of the world and of the self--are an unsolved mystery in neuroscience, and some people think they always will be; it seems impossible to explain subjective experience using the objective methods of science. But in the 25 or so years that we've taken consciousness seriously as a target of scientific scrutiny, we have made significant progress.
The Brain 'Rotates' Memories to Save Them From New Sensations
During every waking moment, we humans and other animals have to balance on the edge of our awareness of past and present. We must absorb new sensory information about the world around us while holding on to short-term memories of earlier observations or events. Our ability to make sense of our surroundings, to learn, to act, and to think all depend on constant, nimble interactions between perception and memory. But to accomplish this, the brain has to keep the two distinct; otherwise, incoming data streams could interfere with representations of previous stimuli and cause us to overwrite or misinterpret important contextual information. (Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.)
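The geometry behind this separation can be pictured in linear-algebra terms: if a neural population holds a memory along an activity direction orthogonal to the one carrying incoming stimuli, the two signals coexist without overwriting each other. Below is a minimal numpy sketch of that idea; the population size, coding axes, and signal values are illustrative assumptions, not data or analysis from the study.

```python
import numpy as np

# Toy illustration of orthogonal coding: one read-out direction carries the
# current stimulus, an orthogonal direction carries the held memory.
rng = np.random.default_rng(0)
n = 50  # neurons in the hypothetical population

sensory_axis = rng.standard_normal(n)
sensory_axis /= np.linalg.norm(sensory_axis)
memory_axis = rng.standard_normal(n)
memory_axis -= (memory_axis @ sensory_axis) * sensory_axis  # Gram-Schmidt step
memory_axis /= np.linalg.norm(memory_axis)

# Encode an earlier observation (0.8) on the memory axis while a new
# stimulus (-0.3) arrives on the sensory axis, in the same population.
population = 0.8 * memory_axis + (-0.3) * sensory_axis

# Reading out along each axis recovers both signals without interference.
print(population @ memory_axis)   # ~0.8: the stored memory survives
print(population @ sensory_axis)  # ~-0.3: the new stimulus is also present
```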
Representation Learning in Partially Observable Environments using Sensorimotor Prediction
Kulak, Thibaut; Ortiz, Michael Garcia
Autonomous Learning for Robotics aims to endow (robotic) agents with the capability to learn from and act in their environment, so that they can adapt to previously unseen situations. In order to learn from this interaction, an agent has to build compact representations of its environment from high-dimensional raw input. Current approaches favor learning such representations with Deep Neural Networks ([1], [2], [3]). Supervised learning extracts representations from the data to solve a classification task, providing the agent with hierarchical, compact representations of different sensory streams ([4], [5]). However, these state-of-the-art machine learning algorithms are not suitable for autonomous learning: they rely on labeled data, which are costly to acquire, and they constrain the representations to the classes they were trained on. Unsupervised learning makes it possible to learn hierarchical compressions of different data streams ([6], [7], [8]).
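To make the sensorimotor-prediction idea concrete, here is a minimal sketch, assuming a linear encoder and a linear forward model trained by plain gradient descent; the dimensions, the toy dynamics, and every name in it are illustrative choices, not the authors' architecture.

```python
import numpy as np

# Sensorimotor-predictive representation learning, linear caricature:
# an encoder E compresses a high-dimensional observation o_t into a compact
# state s_t = E @ o_t; a forward model F predicts s_{t+1} from (s_t, a_t).
# Both are trained to minimise next-state prediction error.
rng = np.random.default_rng(1)
obs_dim, act_dim, state_dim, lr = 64, 4, 8, 1e-3

E = 0.1 * rng.standard_normal((state_dim, obs_dim))              # encoder
F = 0.1 * rng.standard_normal((state_dim, state_dim + act_dim))  # forward model

for step in range(5000):
    # Stand-in interaction data; a real agent would log (o_t, a_t, o_t+1).
    o_t = rng.standard_normal(obs_dim)
    a_t = rng.standard_normal(act_dim)
    o_next = np.roll(o_t, 1) + 0.1 * a_t.sum()   # toy environment dynamics

    s_t, s_next = E @ o_t, E @ o_next
    inp = np.concatenate([s_t, a_t])
    err = F @ inp - s_next                       # prediction error

    # Hand-derived gradients of 0.5 * ||err||^2 for the linear case.
    F -= lr * np.outer(err, inp)
    E -= lr * (np.outer(F[:, :state_dim].T @ err, o_t) - np.outer(err, o_next))
    # NB: without an extra reconstruction or contrastive term, E = 0 is a
    # trivial minimiser; real systems add such a term to keep states useful.
```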
A Cognitive Agent Model Incorporating Prior and Retrospective Ownership States for Actions
Treur, Jan (VU University Amsterdam, Agent Systems Research Group)
The cognitive agent model presented in this paper generates prior and retrospective ownership states for an action based on principles from recent neurological theories. A prior ownership state is affected by prediction of the effects of a prepared action, and exerts control by strengthening or suppressing actual execution of the action. A retrospective ownership state depends on whether the sensed consequences co-occur with the predicted consequences, and is the basis for acknowledging authorship of actions, for example in a social context. It is shown how poor action-effect prediction capabilities can lead to reduced retrospective ownership states, as in persons suffering from schizophrenia.
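As a rough illustration of the two states (a simplification in a few lines of Python, not Treur's formal model), prior ownership can be read as a confidence-based gate on execution, and retrospective ownership as a match score between predicted and sensed consequences:

```python
import numpy as np

def prior_ownership(effect_prediction_confidence, threshold=0.5):
    """Prior ownership: gates execution before the action runs. High
    confidence in the predicted effect strengthens execution; low
    confidence suppresses it."""
    own = effect_prediction_confidence
    return own, own >= threshold  # (ownership level, execute?)

def retrospective_ownership(predicted, sensed, tolerance=1.0):
    """Retrospective ownership: high when the sensed consequences co-occur
    with (i.e., match) the predicted ones, grounding authorship."""
    mismatch = np.linalg.norm(np.asarray(predicted) - np.asarray(sensed))
    return max(0.0, 1.0 - mismatch / tolerance)

# A well-calibrated effect predictor yields strong authorship...
print(retrospective_ownership(predicted=[1.0, 0.0], sensed=[0.95, 0.05]))  # ~0.93
# ...while poor effect prediction, as in the schizophrenia case above,
# yields a reduced retrospective ownership state.
print(retrospective_ownership(predicted=[1.0, 0.0], sensed=[0.2, 0.9]))    # 0.0
```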
A Cognitive Agent Model Displaying and Regulating Different Social Response Patterns
Treur, Jan (VU University Amsterdam, Agent Systems Research Group)
Differences in the social responses of individuals can often be related to differences in the functioning of neurological mechanisms. This paper presents a cognitive agent model capable of showing different types of social response patterns based on such mechanisms, adopted from theories on mirror neuron systems, emotion regulation, empathy, and autism spectrum disorders. The presented agent model provides a basis for human-like social response patterns of virtual agents in the context of simulation-based training (e.g., the training of therapists), gaming, or agent-based generation of virtual stories.
A Biologically Plausible Algorithm for Reinforcement-shaped Representational Learning
Significant plasticity in sensory cortical representations can be driven in mature animals either by behavioural tasks that pair sensory stimuli with reinforcement, or by electrophysiological experiments that pair sensory input with direct stimulation of neuromodulatory nuclei, but usually not by sensory stimuli presented alone. Biologically motivated theories of representational learning, however, have tended to focus on unsupervised mechanisms, which may play a significant role on evolutionary or developmental timescales, but which neglect this essential role of reinforcement in adult plasticity. By contrast, theoretical reinforcement learning has generally dealt with the acquisition of optimal policies for action in an uncertain world, rather than with the concurrent shaping of sensory representations. This paper develops a framework for representational learning which builds on the relative success of unsupervised generative-modelling accounts of cortical encodings to incorporate the effects of reinforcement in a biologically plausible way.
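One way to picture such a framework (a deliberately crude caricature, not the paper's generative-model formulation) is to take an ordinary unsupervised competitive learner and gate its plasticity by a scalar reinforcement signal, so the sensory map reorganises for stimuli paired with reward but not for stimuli presented alone, echoing the experiments described above.

```python
import numpy as np

# Toy reinforcement-gated competitive learning (an illustration only).
# Prototype vectors form an unsupervised sensory representation; a scalar
# reward multiplies the learning rate, so plasticity is driven by stimuli
# paired with reinforcement, not by sensory input alone.
rng = np.random.default_rng(2)
n_units, dim, base_lr = 8, 16, 0.05
prototypes = rng.standard_normal((n_units, dim))

def update(stimulus, reward):
    """Move the best-matching prototype toward the stimulus, gated by reward."""
    winner = np.argmin(np.linalg.norm(prototypes - stimulus, axis=1))
    prototypes[winner] += base_lr * reward * (stimulus - prototypes[winner])
    return winner

paired = rng.standard_normal(dim)   # stimulus paired with reinforcement
alone = rng.standard_normal(dim)    # stimulus presented without reward

for _ in range(200):
    update(paired + 0.1 * rng.standard_normal(dim), reward=1.0)
    update(alone + 0.1 * rng.standard_normal(dim), reward=0.0)  # no plasticity

# Only the rewarded stimulus acquires a dedicated, sharpened representation.
print(np.linalg.norm(prototypes - paired, axis=1).min())  # small: tuned unit
print(np.linalg.norm(prototypes - alone, axis=1).min())   # large: untouched
```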